Interaction and representational integration: evidence from speech errors.

Authors

  • Matthew Goldrick
  • H Ross Baker
  • Amanda Murphy
  • Melissa Baese-Berk
Abstract

We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.


Similar resources

Cross-linguistic Comparison of Refusal Speech Act: Evidence from Trilingual EFL Learners in English, Farsi, and Kurdish

To date, little research on pragmatic transfer has considered a multilingual situation in which three different languages spoken by one person interact. Of interest was whether pragmatic transfer of refusals among three languages spoken by the same person occurs from L1 and L2 to L3, from L1 to L2 and then to L3, or from L1 and L1 (if there is more than one L1) to L2. This study ai...


Dynamic Modeling Approaches for Audiovisual Speech Perception and Multisensory Integration

Multimodal information including auditory, visual and even haptic information is integrated during speech perception. Articulatory information provided by a talker's face enhances speech intelligibility in congruent and temporally coincident signals, and produces a perceptual fusion (e.g. the "McGurk effect") when the auditory and visual signals are incongruent. This paper focuses on promising ...


Title of Dissertation: CORTICAL DYNAMICS OF AUDITORY-VISUAL SPEECH: A FORWARD MODEL OF MULTISENSORY INTEGRATION

Title of Dissertation: CORTICAL DYNAMICS OF AUDITORY-VISUAL SPEECH: A FORWARD MODEL OF MULTISENSORY INTEGRATION. Virginie van Wassenhove, Ph.D., 2004. Dissertation Directed By: David Poeppel, Ph.D., Department of Linguistics, Department of Biology, Neuroscience and Cognitive Science Program. In noisy settings, seeing the interlocutor's face helps to disambiguate what is being said. For this to hap...


Unification-based Multimodal Integration

Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing ...





Journal:
  • Cognition

Volume 121, Issue 1

Pages: -

Year of publication: 2011